chore: Lagrange polynomial cache refactor#522
Conversation
Consolidated Tests Results (2026-04-27 15:25:17)
test-reporter: Run #1706
🎉 All tests passed! 🍂 No flaky tests in this run. (GitHub Test Reporter by CTRF)
Force-pushed from deca7ef to d0942dc.
dvdplm left a comment:
The code here LGTM. I have one nag about the architecture which is not new to this PR.
We initialize one static for each Field impl that we have, but each LAGRANGE_STORE is defined outside the trait impl. We expect it to be there, but have no compiler guarantee that it actually is. Furthermore, each LAGRANGE_STORE is parametrized on the number of parties and the threshold. It works here because we tightly control the call sites (two in total), but again there is no compile-time guarantee that the LAGRANGE_STORE I'm using is parametrized the way I expect. Code has to trust that the store was initialized somewhere, in the proper way. It's up to us to know when/how to call init_all_lagrange_stores.
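To make the concern concrete, here is a minimal sketch of the pattern being described, with hypothetical names (the real store and field types differ):

```rust
use std::sync::OnceLock;

trait Field: Sized {
    // Nothing in the trait obliges an impl to have a Lagrange store...
}

#[derive(Clone, Copy)]
struct Gf256(u8); // hypothetical Field impl
impl Field for Gf256 {}

// ...so each store lives as a "free" static next to its impl. Its type says
// nothing about which (num_parties, threshold) it was built for; every call
// site must trust that init_all_lagrange_stores ran with the right values.
static LAGRANGE_STORE: OnceLock<Vec<Vec<Gf256>>> = OnceLock::new();
```

Tying the store to the trait (e.g. an associated function that returns it) would at least turn its existence into a compile-time obligation, though the runtime parametrization would still be invisible in the type.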
The third nitpick I have is about init_all_lagrange_stores: is the algebra crate the right place for it? One could argue that initializing a cache that depends on protocol parameters (parties & threshold) should happen in (or close to) the protocol. I'm not sure what my own opinion actually is on this; just thought I'd mention it.
I'm approving this because it is a clear improvement on the old code, but yeah, there's something about a runtime-parametrized cache held in a "free" static that is awkward.
Agreed with that. One solution might be to only have this cache for the small enough fields, and for those fields generate all the possible Lagrange bases. In any case, for the bigger fields caching is practically infeasible due to memory complexity, so for those we have to do on-the-fly computation. I guess the main drawback is that this assumes the complexity of the cache is maximal.
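As a rough sketch of that split, under the stated assumption that only the small fields get a cache (all names hypothetical):

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Gf8(u8); // hypothetical small-field element
#[derive(Clone, Copy)]
struct BigElem(u128); // hypothetical large-field element

type SmallBasis = Vec<Vec<Gf8>>;

// Small field: precompute a basis for every possible evaluation-point set.
// Needs no runtime (parties, threshold) input; feasible only because the
// field, and hence the number of point sets, is tiny.
static GF8_LAGRANGE: LazyLock<HashMap<Vec<Gf8>, SmallBasis>> = LazyLock::new(|| {
    let mut map = HashMap::new();
    // ...enumerate all point sets and insert their bases (elided)...
    map.insert(vec![Gf8(1)], vec![vec![Gf8(1)]]);
    map
});

fn gf8_basis(points: &[Gf8]) -> SmallBasis {
    // Always a hit once the enumeration above is complete.
    GF8_LAGRANGE.get(points).expect("cache is exhaustive").clone()
}

// Big field: no cache at all; compute on the fly (placeholder body).
fn big_field_basis(points: &[BigElem]) -> Vec<Vec<BigElem>> {
    vec![vec![BigElem(0)]; points.len()]
}
```

The drawback named above shows up directly: the small-field map is sized for the worst case, regardless of which (parties, threshold) are actually in use.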
Description of changes
This PR refactors Lagrange polynomial caching in `threshold-algebra` to avoid repeated interpolation setup work in hot paths.

The previous implementation memoized Lagrange polynomials lazily per field type behind a `RwLock<HashMap<...>>`. This change replaces that with pre-computed `OnceLock` (or `LazyLock` for smaller fields) stores and routes interpolation through cached Lagrange bases when they are available, with a fallback to direct computation when they are not.

Note: For GF8 and GF16, the fields are small enough that we can compute all possible bases, so we don't need to know the number of parties and the threshold beforehand.
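Roughly the shape of the cached path, as a sketch: `lagrange_interpolation_with_polys` is the entry point this PR exposes, but the types and signatures below are illustrative, not the crate's actual API.

```rust
use std::sync::OnceLock;

#[derive(Clone, Copy)]
struct Elem(u128); // hypothetical stand-in for a field/ring element

type LagrangeBasis = Vec<Vec<Elem>>;

// Filled once at startup from the configured (num_parties, threshold).
static LAGRANGE_STORE: OnceLock<LagrangeBasis> = OnceLock::new();

pub fn init_lagrange_store(num_parties: usize, threshold: usize) {
    let _ = LAGRANGE_STORE.set(compute_basis(num_parties, threshold));
}

fn compute_basis(num_parties: usize, threshold: usize) -> LagrangeBasis {
    vec![vec![Elem(0); threshold + 1]; num_parties] // placeholder setup work
}

// Hot path: reuse the cached basis when the store was initialized,
// otherwise fall back to computing it directly.
pub fn lagrange_interpolation(points: &[Elem]) -> Vec<Elem> {
    let fallback;
    let basis = match LAGRANGE_STORE.get() {
        Some(cached) => cached,
        None => {
            fallback = compute_basis(points.len(), points.len().saturating_sub(1));
            &fallback
        }
    };
    lagrange_interpolation_with_polys(points, basis)
}

pub fn lagrange_interpolation_with_polys(points: &[Elem], _basis: &LagrangeBasis) -> Vec<Elem> {
    points.to_vec() // placeholder for the actual interpolation
}
```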
For bigger fields we have an init function, but TBD what we want to do long term (partly because if `n choose t` grows too big we can't store all possible values):

- `kms-server` initializes the stores from the configured threshold peer list and threshold
- `threshold-fhe` initializes the stores from the loaded party topology on startup

This PR also:
- exposes `lagrange_interpolation_with_polys` so pre-computed bases can be reused directly

Bench results
Below we reconstruct 8 batches (in parallel) of 1000 shares each, with 13 parties and threshold 4, for a varying number of shares (from 5 to 13), over the degree-4 extension of Z128 (the ring we use in prod).
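For orientation, a minimal sketch of a harness with that shape (rayon for the parallel batches; the element type and reconstruction are placeholders, not the crate's types):

```rust
use rayon::prelude::*;

// Placeholder element: in the benchmark this is the degree-4 extension of Z128.
type Elem = u128; // arithmetic taken mod 2^128 via wrapping ops

fn reconstruct(shares: &[Elem], basis: &[Elem]) -> Elem {
    // Inner product of the shares with a (pre-computed) Lagrange basis.
    shares
        .iter()
        .zip(basis)
        .fold(0u128, |acc, (s, b)| acc.wrapping_add(s.wrapping_mul(*b)))
}

fn main() {
    let basis = vec![1u128; 5]; // would come from the cached Lagrange store
    // 8 batches of 1000 sharings each, here with 5 shares per sharing.
    let batches = vec![vec![vec![1u128; 5]; 1000]; 8];
    let opened: Vec<Vec<Elem>> = batches
        .par_iter()
        .map(|batch| batch.iter().map(|s| reconstruct(s, &basis)).collect())
        .collect();
    assert_eq!(opened.len(), 8);
}
```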
Issue ticket number and link
Closes https://github.com/zama-ai/kms-internal/issues/2932
PR Checklist
I attest that all checked items are satisfied. Any deviation is clearly justified above.
- Conventional commit title (e.g. `chore: ...`).
- Deferred work tracked with `TODO(#issue)`.
- `unwrap`/`expect`/`panic` only in tests or for invariant bugs (documented if present).
- Infra changes: `devops` label + infra notified + infra-team reviewer assigned.
- Breaking changes marked with `!` and affected teams notified.
- Sensitive data: `Zeroize` + `ZeroizeOnDrop` implemented.
- No `unsafe`; if unavoidable: minimal, justified, documented, and test/fuzz covered.

Dependency Update Questionnaire (only if deps changed or added)
Answer in the `Cargo.toml` next to the dependency (or here if updating).

More details and explanations for the checklist and dependency updates can be found in CONTRIBUTING.md.